Blending in the Hub
Abstract
Conceptual blending has been employed very successfully to understand the process of concept invention, studied particularly within cognitive psychology and linguistics. However, despite this influential research, within computational creativity little effort has been devoted to fully formalising these ideas and making them amenable to computational techniques. We here present the basic formalisation of conceptual blending, as sketched by the late Joseph Goguen, and show how the Distributed Ontology Language DOL can be used to declaratively specify blending diagrams. Moreover, we discuss in detail how the workflow and creative act of generating and evaluating a new, blended concept can be managed and computationally supported within Ontohub, a DOL-enabled theory repository with support for a large number of logical languages and formal linking constructs.

Concept Invention via Blending

In the general methodology of conceptual blending introduced by Fauconnier and Turner (2003), the blending of two thematically rather different conceptual spaces yields a new conceptual space with emergent structure, selectively combining parts of the given spaces whilst respecting common structural properties. (The usage of the term ‘conceptual space’ in blending theory is not to be confused with the usage established by Gärdenfors (2000).) The ‘imaginative’ aspect of blending is summarised as follows in Turner (2007):

[...] the two inputs have different (and often clashing) organising frames, and the blend has an organising frame that receives projections from each of those organising frames. The blend also has emergent structure on its own that cannot be found in any of the inputs. Sharp differences between the organising frames of the inputs offer the possibility of rich clashes. Far from blocking the construction of the network, such clashes offer challenges to the imagination. The resulting blends can turn out to be highly imaginative.

A classic example of this is the blending of the concepts house and boat, yielding as most straightforward blends the concepts of a houseboat and a boathouse, but also an amphibious vehicle (Goguen and Harrell, 2009).

In the almost unlimited space of possibilities for combining existing ontologies to create new ontologies with emergent structure, conceptual blending can be built on to provide a structural and logic-based approach to ‘creative’ ontological engineering. This endeavour primarily raises the following two challenges: (1) when combining the terminologies of two ontologies, the shared semantic structure is of particular importance to steer possible combinations. This shared semantic structure leads to the notion of base ontology, which is closely related to the notion of ‘tertium comparationis’ found in the classic rhetoric and poetic theories, but also in more recent cognitive theories of metaphor (see, e.g., Jaszczolt (2003)); (2) having established a shared semantic structure, there is typically still a huge number of possibilities that can capitalise on this information in the combination process: here, structural optimality principles as well as ontology evaluation techniques take on a central role in selecting interesting blends.

We believe that the principles governing ontological blending are quite distinct from the rather informal principles employed in blending phenomena in language or poetry, and from the rather strict principles ruling blending in mathematics, in particular in the way formal inconsistencies are dealt with.
For instance, whilst blending in poetry might be particularly inventive or imaginative when the structure of the basic categories found in the input spaces is almost completely ignored, and whilst the opposite, i.e., rather strict adherence to sort structure, is important in areas such as mathematics in order to generate meaningful blends (for instance, when creating the theory of transfinite cardinals by blending the perfective aspect of counting up to any fixed finite number with the imperfective aspect of ‘endless counting’ (Núñez, 2005)), ontological blending is situated somewhere in the middle: re-arrangement and new combination of basic categories can be rather interesting, but has to be finely controlled through corresponding interfaces, often regulated by or related to choices found in foundational or upper ontologies.

The core contributions of the paper, which elaborates on ideas first introduced in Hois et al. (2010), with detailed technical definitions given in Kutz et al. (2012), can be summarised as follows. We:

• sketch the logical analysis of conceptual blending in terms of blending diagrams and colimits, as originally proposed by Joseph Goguen, and give an abstract definition of ontological blendoids capturing the basic intuitions of conceptual blending in the ontological setting;

• provide a formal language for declaratively specifying blending diagrams by employing the OWL fragment of the distributed ontology language DOL for blending (by ‘OWL’ we refer to OWL 2 DL, see http://www.w3.org/TR/owl2-overview/). This provides a structured approach to ontology languages and combines the simplicity and good tool support of OWL with the more complex blending facilities of OBJ3 (Goguen and Malcolm, 1996) or Haskell (Kuhn, 2002);

• discuss the capabilities of the Ontohub/Hets ecosystem with regard to collaboratively managing, creating, and evaluating blended concepts and theories; this includes an investigation of the evaluation problem in blending, together with a discussion of structural optimality principles and current automated reasoning support.

We close with a detailed discussion of open problems and future work.

Blending Computationalised

Goguen created the field of algebraic semiotics, which logically formalises the structural aspects of semiotic signs, sign systems, and their mappings (Goguen, 1999). In Goguen and Harrell (2009), algebraic semiotics has been applied to user interface design and blending. Algebraic semiotics does not claim to provide a comprehensive formal theory of blending; indeed, Goguen and Harrell admit that many aspects of blending, in particular concerning the meaning of the involved notions, as well as the optimality principles for blending, cannot be captured formally. However, the structural aspects can be formalised and provide insights into the space of possible blends. Goguen defines semiotic systems to be algebraic theories that can be formulated by using the algebraic specification language OBJ (Goguen and Malcolm, 1996). Moreover, a special case of a semiotic system is a conceptual space: it consists only of constants and relations, one sort, and axioms stating that certain relations hold on certain instances. As we focus on standard ontology languages, namely OWL and first-order logic, we here replace the logical language OBJ with these. As structural aspects in the ontology language are necessary for blending, we augment these languages with structuring mechanisms known from algebraic specification theory (Kutz et al., 2008). This allows us to translate most parts of Goguen's theory to these ontology languages.
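To make this concrete, a conceptual space in Goguen's sense can be written down directly as a small OWL ontology. The following sketch in OWL Manchester Syntax uses a single class as the one ‘sort’, individuals as constants, and an object property as a relation; all names are illustrative assumptions rather than taken from the paper:

    # Illustrative conceptual space: one sort, constants as individuals,
    # relations as object properties, and axioms asserting that certain
    # relations hold on certain instances.
    Class: Entity
    ObjectProperty: livesIn
        Domain: Entity
        Range: Entity
    Individual: john
        Types: Entity
        Facts: livesIn house1
    Individual: house1
        Types: Entity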
Goguen's main insight has been that semiotic systems and conceptual spaces can be related via morphisms, and that blending is comparable to colimit computation, a construction that abstracts the operation of disjoint unions modulo the identification of certain parts, explained in more detail below. In particular, the blending of two concepts is often a pushout (also called a blendoid in this context).

[Figure 1: The basic integration network for blending: concepts in the base ontology are first refined to concepts in the input ontologies (base morphisms) and then selectively blended into the blendoid (blendoid morphisms).]

Some basic definitions (note that these definitions apply to OWL, but also to many other logics; indeed, they apply to any logic formalised as an institution (Goguen and Burstall, 1992)): Non-logical symbols are grouped into signatures, which for our purposes can be regarded as collections of kinded symbols (e.g. concept names, relation names). Signature morphisms are maps between signatures that preserve (at least) the kinds of symbols (i.e. they map concept names to concept names, relations to relations, etc.). A theory or ontology pairs a signature with a set of sentences over that signature, and a theory morphism (or interpretation) between two theories is just a signature morphism between the underlying signatures that preserves logical consequence; that is, ρ : T1 → T2 is a theory morphism if T2 |= ρ(T1), i.e. all the translations of sentences of T1 along ρ follow from T2. This construction is completely logic independent. Signature/theory morphisms are an essential ingredient for describing conceptual blending in a logical way.

We now give a general definition of ontological blending capturing the basic intuition that a blend of input ontologies shall partially preserve the structure imposed by base ontologies, but otherwise be an almost arbitrary extension or fragment of the disjoint union of the input ontologies with appropriately identified base space terms. For the following definition, which we first introduced in Kutz et al. (2012), a diagram consists of a set of ontologies and a set of morphisms between them. The colimit of a diagram is similar to a disjoint union of its ontologies, with some identifications of shared parts as specified by the morphisms in the diagram. We refrain from presenting the category-theoretic definition here (it can be found in Adámek, Herrlich, and Strecker (1990)), but explain the colimit operation using the examples below.

Definition 1 (Ontological Base Diagram) An ontological base diagram is a diagram D in which the minimal nodes Bi (i ∈ Dmin ⊆ |D|) are called base ontologies, the maximal nodes Ij (j ∈ Dmax ⊆ |D|) are called input ontologies, and the theory morphisms μij : Bi → Ij are called the base morphisms. If there are exactly two inputs I1, I2, and one base B, the diagram D is called classical and has the shape of a V. In this case, B is also called the tertium comparationis.

Fig. 1 illustrates the basic, classical case of an ontological blending diagram. The lower part of the diagram shows the base space (tertium), i.e. the common generalisation of the two input spaces, which is connected to these via total (theory) morphisms, the base morphisms.
The newly invented concept is at the top of this diagram, and is computed from the base diagram via a colimit. More precisely, any consistent subset of the colimit of the base diagram may be seen as a newly invented concept, a blendoid (a more precise definition of this notion is given in Kutz et al. (2012)). Note that, in general, ontological blending can deal with more than one base and more than two input ontologies.

Computing the Tertium Comparationis

To find candidates for base ontologies that could serve for the generation of ontological blendoids, much more shared semantic structure is required than the surface similarities that alignment approaches rely on. The common structural properties of the input ontologies that are encoded in the base ontology are typically of a more abstract nature. The standard example here relies on image schemata, such as the notion of a container (see e.g. Kuhn (2002)). Thus, in particular, foundational ontologies can support such selections. In analogical reasoning, ‘structure’ is (partially) mapped from a source domain to a target domain (Forbus, Falkenhainer, and Gentner, 1989; Schwering et al., 2009). Intuitively, the operation of computing a base ontology can thus be seen as a bi-directional search for analogy or generalisation into a base ontology together with the corresponding mappings. Providing efficient means for finding a number of suitable candidate generalisations is essential to making the entire blending process computationally feasible. Consider the example of blending ‘house’ with ‘boat’ discussed below in detail: even after fixing the base ontology itself, guessing the right mappings into the input ontologies means guessing within a space of approximately 1.4 billion signature morphisms. Three promising candidates for finding generalisations are:

(1) Ontology intersection: Normann (2008) has studied the automatisation of theory interpretation search for formalised mathematics, implemented as part of the Heterogeneous Tool Set (HETS, see below). Kutz and Normann (2009) applied these ideas to ontologies by using the ontologies' axiomatisations for finding their shared structure. Accidental naming of concept and role names is deliberately ignored and such names are treated as arbitrary symbols (i.e., any concept may be matched with any other). By computing mutual theory interpretations between the inputs, the method allows a base ontology to be computed as an intersection of the input ontologies, together with the corresponding theory morphisms. While this approach can be efficiently applied to ontologies with non-trivial axiomatisations, it is less suited to lightweight ontologies: e.g., ‘intersecting’ a smaller taxonomy with a larger one clearly results in a huge number of possible taxonomy matches (Kutz and Normann, 2009). In this case, the following techniques are more appropriate.

(2) Structure-based ontology matching: matching and alignment approaches are often restricted to finding simple correspondences between atomic entities of the ontology vocabulary. In contrast, work such as Ritze et al. (2009) and Walshe (2012) focuses on defining a number of complex correspondence patterns that can be used together with standard alignments in order to relate complex expressions between two input ontologies. For instance, the ‘Class by Attribute Type Pattern’ may be employed to claim the equivalence of the atomic concept PositiveReviewedPaper in ontology O1 with the complex concept ∃hasEvaluation.Positive of O2.
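In OWL Manchester Syntax, this correspondence amounts to an equivalence axiom of roughly the following form (a sketch; the concrete rendering depends on how the two vocabularies are combined):

    # ‘Class by Attribute Type’ correspondence rendered as an OWL axiom.
    Class: PositiveReviewedPaper
        EquivalentTo: hasEvaluation some Positive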
Such an equivalence can be taken as an axiom of the base ontology; note, however, that it could typically not be found by intersecting the input ontologies. Such a library of design patterns may be seen as a variation of the idea of using image schemata.

(3) Analogical reasoning: Heuristic-driven theory projection is a logic-based technique for analogical reasoning that can be employed for the task of computing a common generalisation of input theories. Schwering et al. (2009) establish an analogical relation between a source theory and a target theory (both first-order) by computing a common generalisation (called ‘structural description’). They implement this by using anti-unification (Plotkin, 1970). A typical example is to find a generalisation (base ontology) formalising the structural commonalities between the Rutherford atomic model and a model of the solar system. This process may be assisted by a background knowledge base (in the ontological setting, a related domain or foundational ontology). Indeed, this idea has been further developed in Martinez et al. (2011).

Selecting the Blendoids: Optimality Principles

Once a common base ontology is available (computed or given), there is typically a large number of possible blendoids. For example, even in the rather simple case of combining House and Boat, allowing for blendoids which only partially maintain structure (called non-primary blendoids in Goguen and Harrell (2009)), i.e., where any subset of the axioms may be propagated to the resulting blendoid, the number of possible blendoids is of the order of 1000. Clearly, from an ontological viewpoint, the overwhelming majority of these candidates is rather meaningless. A ranking therefore needs to be applied on the basis of specific ontological principles.

In conceptual blending theory, a number of optimality principles are given in an informal and heuristic style (Fauconnier and Turner, 1998, 2003). While they provide useful guidelines for evaluating natural language blends, they do not suggest a direct algorithmic implementation, as also analysed in Goguen and Harrell (2009). However, the importance of designing computational versions of optimality principles was realised early on, and one such attempt may be found in the work of Pereira and Cardoso (2003), who proposed an implementation of the eight optimality principles presented in Fauconnier and Turner (1998), based on quantitative metrics for their more lightweight logical formalisation of blending. Such metrics, though, are not directly applicable to more expressive languages such as OWL or first-order logic. Moreover, the standard blending theory of Fauconnier and Turner (2003) does not assign types, which might make sense in the case of linguistic blends, where type information is often ignored. A typical example of a type mismatch in language is the operation of personification, e.g., turning a boat into an ‘inhabitant’ of the ‘boathouse’. However, in the case of blending in mathematics or ontology, this loss of information is often rather unacceptable: on the contrary, fine-grained control of type or sort information may be of the utmost importance.

Optimality principles for ontological blending are of two kinds. (1) purely structural/logical principles: these extend and refine the criteria given in Goguen and Harrell (2009), namely degree of commutativity of the blend diagram, type casting (preservation of taxonomical structure), degree of partiality (of signature morphisms), and degree of axiom preservation.
In the context of OWL, typing needs to be replaced with the preservation of specific axioms encoding the taxonomy. (2) heuristic principles: these include introducing preference orders on morphisms (an idea that Goguen labelled 3/2 pushouts (Goguen, 1999)) reflecting their ‘quality’, e.g. measured in terms of the degree of type violation; specific ontological principles, e.g. adherence to the OntoClean methodology (Guarino and Welty, 2002); or general ontology evaluation techniques such as competency questions, further discussed below. Another set of heuristics consists of quantitative, statistical metrics, similar in style to those proposed in Pereira and Cardoso (2003).

The Distributed Ontology Language DOL

The distributed ontology language DOL is an ideal formal language for specifying ontologies, base diagrams, and their blends. DOL is a metalanguage in the sense that it enables the reuse of existing ontologies (written in some ontology language like OWL or Common Logic) as building blocks for new ontologies and, further, allows intended relationships between ontologies to be specified. One important feature of DOL is the ability to combine ontologies that are written in different languages without changing their semantics. DOL is going to be submitted as a response to the Object Management Group's (OMG) Ontology, Model and Specification Integration and Interoperability (OntoIOp) Request For Proposal (http://www.omg.org/cgi-bin/doc?ad/2013-12-02).

In this section, we introduce DOL only informally. A formal specification of the language and its model-theoretic semantics can be found in Mossakowski et al. (2013). For the purpose of ontology blending, the following features of DOL are relevant:

• DOL library. A DOL library consists of basic and structured ontologies and ontology interpretations. A basic ontology is an ontology written in some ontology language (e.g., OWL or Common Logic). A structured ontology builds on basic ontologies with the help of ontology translations, ontology unions, and symbol hiding.

• ontology translation (written O1 with σ). A translation takes an ontology O1 and a renaming function (technically, a signature morphism) σ. The result of the translation is an ontology O2, which differs from the ontology O1 only by substituting the symbols as specified by the renaming function.

• ontology union (written O1 and O2). The union of two ontologies O1 and O2 is a new ontology O3, which combines the axioms of both ontologies.

• symbol hiding (written O1 hide {s1, ..., sn}). A symbol hiding takes an ontology O1 and a set of symbols s1, ..., sn. The result of the hiding is a new ontology O2, which is the result of ‘removing’ the symbols s1, ..., sn from the signature of ontology O1. Nevertheless, O2 keeps all semantic constraints from O1. (By approximation, one could consider O2 as the ontology that results from existentially quantifying s1, ..., sn in O1.)

• ontology interpretation (written interpretation INT_NAME : O1 to O2 = σ). An ontology interpretation is a claim about the relationship between two ontologies O1 and O2, given some renaming function σ. It states that all the constraints that result from translating O1 with σ can be proven by O2.

Some additional features that are necessary for blending will be introduced in the next section.

Formalising Blending in DOL

The novelty proposed by DOL is that the user can specify the base diagram of the blendoid. This is a crucial task, as the resulting blendoid depends on the relationships between symbols that are recorded in the diagram.
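As a schematic illustration of such a specification (the ontology and symbol names below are placeholders, not the paper's actual house/boat vocabulary), a classical, V-shaped base diagram can be written down as two interpretations from a base ontology into the two input ontologies, using only the constructs introduced above:

    %% Illustrative sketch of a classical base diagram in DOL.
    %% Base, House, Boat and the symbol maps are placeholder names.
    interpretation i1 : Base to House = Object |-> House, Inhabitant |-> Resident
    interpretation i2 : Base to Boat = Object |-> Boat, Inhabitant |-> Sailor

The colimit of the diagram spanned by i1 and i2 then yields the blendoid; computing it is precisely the step that can be automated, as described next.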
Ontohub, our web platform and repository engine for managing distributed heterogeneous ontologies, discussed in more detail below, is able to use the specification of a base diagram to automatically generate the colimit-blendoid. In this section, we illustrate the specification of base diagrams in DOL and the resulting blendoids by blending house and boat into houseboat and boathouse. The main inputs for the blendings consist of two ontologies, one for HOUSE and the other for BOAT. We adapted them from Goguen and Harrell (2009) but gave them a stronger axiomatisation, making them more realistic. The purpose of this exercise is to show, using this classic blend, that our framework allows complex ontological theories to be blended in a generic way, and is thus not theoretically restricted to any particular domain or even logical language. Fig. 2 shows the ontology for HOUSE in OWL Manchester Syntax.
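To indicate the kind of axiomatisation involved, a strongly simplified house ontology in OWL Manchester Syntax might look as follows; all class and property names here are illustrative assumptions, not the actual axiomatisation shown in Fig. 2:

    # Illustrative sketch only; not the paper's Fig. 2.
    Class: House
        SubClassOf: Building
        SubClassOf: inhabitedBy some Person
    Class: Building
    Class: Person
    ObjectProperty: inhabitedBy
        Domain: House
        Range: Person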